13 research outputs found

    Personal life event detection from social media

    Creating video clips out of personal content from social media is on the rise; MuseumOfMe, Facebook Lookback, and Google Awesome are popular examples. One core challenge in creating such life summaries is identifying personal events and their time frames. Such videos can greatly benefit from automatically distinguishing social media content about someone's own wedding that week from content about an old wedding or a friend's wedding. In this paper we describe our approach for identifying a number of common personal life events from social media content (using Twitter as our test collection), using multiple feature-based classifiers. Results show that combining linguistic and social interaction features increases overall classification accuracy for most events, while some events remain relatively more difficult than others (e.g. newborn, with a mean precision of 0.6 across all three models).
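
    As an illustration of the kind of multi-feature classifier described above (not the authors' actual system), the sketch below combines TF-IDF linguistic features with hypothetical social-interaction counts in a single scikit-learn classifier; the example tweets, counts and labels are made up.

```python
# Minimal sketch: linguistic (TF-IDF) + social-interaction features in one
# classifier, assuming scikit-learn/scipy; data and labels are illustrative.
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["just got married!!", "congrats on your wedding", "at work again"]
social = np.array([[12, 3], [2, 1], [0, 0]])   # e.g. [likes, replies] per tweet
labels = ["own_wedding", "friend_wedding", "none"]

vec = TfidfVectorizer(ngram_range=(1, 2))
X_text = vec.fit_transform(tweets)
X = hstack([X_text, social])                   # combine both feature groups
clf = LogisticRegression(max_iter=1000).fit(X, labels)

test = hstack([vec.transform(["my wedding day tomorrow"]), np.array([[5, 2]])])
print(clf.predict(test))
```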

    Extracting semantic entities and events from sports tweets

    Large volumes of user-generated content on practically every major issue and event are being created on the microblogging site Twitter. This content can be combined and processed to detect events, entities and popular moods to feed various knowledge-intensive practical applications. On the downside, these content items are very noisy and highly informal, making it difficult to extract sense from the stream. In this paper, we exploit various approaches to detect named entities and significant micro-events from users' tweets during a live sports event. We describe how combining linguistic features with background knowledge, together with Twitter-specific features, achieves highly precise detection results (F-measure = 87%) across different datasets. A study was conducted on tweets from cricket matches in the ICC World Cup in order to augment the event-related non-textual media with collective intelligence.
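
    A minimal, hypothetical sketch of the combination described above: a background-knowledge gazetteer plus a Twitter-specific cue (hashtags) for spotting entities in cricket tweets. The gazetteer entries, entity types and function names are illustrative only, not the paper's pipeline.

```python
# Toy entity spotter for sports tweets: gazetteer lookup (background
# knowledge) combined with hashtag extraction (Twitter-specific feature).
import re

GAZETTEER = {"sachin tendulkar": "Player", "wankhede": "Stadium", "india": "Team"}

def extract_entities(tweet: str):
    text = tweet.lower()
    found = [(name, etype) for name, etype in GAZETTEER.items() if name in text]
    # Hashtags frequently carry entity or micro-event mentions.
    hashtags = re.findall(r"#(\w+)", text)
    found += [(tag, "Hashtag") for tag in hashtags]
    return found

print(extract_entities("Sachin Tendulkar on fire at Wankhede! #WorldCup #India"))
```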

    Exploring user behavior and needs in Q&A communities

    One of the difficult challenges for any knowledge-centric online community is to sustain the momentum of knowledge sharing and knowledge creation by its members. This requires a clearer understanding of the user needs that drive community members to contribute, engage and stay loyal to the community. In this paper, we explore the applicability of Abraham Maslow's theory (1943) to understand user behavior and latent user needs using exploratory factor analysis. Results show that users are largely driven by four main needs: social interaction, altruism, cognitive need and reputation. Our results further indicate that users with high reputation are more likely to stay in the community longer than others, and that socially motivated users are responsible for increased content creation.
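
    The sketch below illustrates exploratory factor analysis of the kind referred to above, using scikit-learn on made-up behavioural data; the four-factor setting mirrors the four reported needs, and the variables are assumptions, not the study's actual measures.

```python
# Exploratory factor analysis sketch on synthetic user-behaviour data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# rows = users, columns = hypothetical behaviour measures
# (answers posted, comments, votes cast, badges, logins, ...)
X = rng.normal(size=(200, 8))

fa = FactorAnalysis(n_components=4, random_state=0).fit(X)
print(fa.components_.round(2))   # loadings of each variable on 4 latent factors
```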

    A lightweight web video model with content and context descriptions for integration with linked data

    The rapid increase of video data on the Web has created an urgent need for effective representation, management and retrieval of web videos. Many studies have been carried out on the ontological representation of videos, using either domain-dependent or generic schemas such as MPEG-7, MPEG-4, and COMM. In spite of their extensive coverage and sound theoretical grounding, these schemas have yet to be widely adopted, two likely reasons being the complexity involved and a lack of tool support. We propose a lightweight video content model for content-context description and integration. The uniqueness of the model is that it captures the emerging social context used to describe and interpret the video. Our approach is grounded in exploiting easily extractable, evolving contextual metadata and the availability of existing data on the Web. This enables representational homogeneity and a firm basis for information integration among semantically-enabled data sources. The model reuses many existing schemas to describe its ontology classes and shows the scope for interlinking with the Linked Data cloud.
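
    As a rough illustration (not the paper's actual schema), the snippet below describes a web video with a few content and context statements in RDF via rdflib and links it out to the Linked Data cloud; the namespace and property names are placeholders.

```python
# Lightweight RDF description of a video with content and context statements,
# linked to DBpedia; EX and its properties are hypothetical.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, DCTERMS

EX = Namespace("http://example.org/videomodel#")   # placeholder model namespace
g = Graph()

video = URIRef("http://example.org/videos/42")
g.add((video, RDF.type, EX.Video))
g.add((video, DCTERMS.title, Literal("Cricket highlights")))
g.add((video, EX.uploadedBy, URIRef("http://example.org/users/alice")))    # social context
g.add((video, EX.depicts, URIRef("http://dbpedia.org/resource/Cricket")))  # Linked Data link

print(g.serialize(format="turtle"))
```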

    Semantic Retrieval With Spreading Activation

    No full text
    This thesis is mainly concerned with information retrieval using constrained spreading activation over a semantic model of data. Spreading activation has long been a subject of research in artificial intelligence and cognitive psychology; it imitates the cognitive aspects of human memory in computational applications. In this research we study the effect of spreading activation on a semantic model of data.
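
    A toy sketch of constrained spreading activation over a small semantic network, using plain Python dictionaries rather than the thesis implementation; the graph, decay and threshold values are arbitrary.

```python
# Activation spreads from seed nodes along weighted edges, decaying at each
# hop; the threshold is the constraint that prunes weak activation.
def spread(graph, seeds, decay=0.5, threshold=0.1, max_steps=3):
    activation = dict(seeds)                      # node -> accumulated activation
    frontier = dict(seeds)
    for _ in range(max_steps):
        nxt = {}
        for node, act in frontier.items():
            for neigh, weight in graph.get(node, {}).items():
                a = act * weight * decay
                if a >= threshold:                # constraint: drop weak signals
                    nxt[neigh] = nxt.get(neigh, 0.0) + a
        for n, a in nxt.items():
            activation[n] = activation.get(n, 0.0) + a
        frontier = nxt
    return activation

graph = {"jaguar": {"cat": 0.9, "car": 0.8}, "cat": {"animal": 0.9}}
print(spread(graph, {"jaguar": 1.0}))
```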

    A Social Context Enabled Framework for Semantic Enrichment and Integration of Web Videos

    The automatic inference of video semantics is an important but highly challenging problem whose solution can greatly contribute towards the annotation, retrieval, personalisation and reusability of video on the web. From a semantic annotation and retrieval perspective, this thesis investigates the influence of multiple video contexts on inferring video semantics, specifically aiming to improve video tagging and content description. The objective of the thesis is two-fold: 1) formalising the representation of a video and its content via an ontological model, and 2) inferring concepts to augment the model. First, a lightweight conceptual model of a video is proposed that describes a video object at four structural abstractions (video, shot, frame, image region) and with four meta-information categories (media, content feature, content semantics and context); second, we investigate an ensemble of methods to infer video semantics from multiple contextual sources in order to augment the above model. The study showed that contextual sources contribute positively to understanding the 'aboutness' of a video, and that many descriptive concepts can be discovered that were not originally provided by the creator. Experimenting with different contextual sources showed that context-contributed semantic enrichment is not restricted to document-level video annotations, but can go further and localise the detected entities on the video timeline for fine-grained, time-stamped annotation. In all studies we found that a combination of cues results in more robust concept detection than the cues in isolation. We evaluated our approaches using both quantitative analysis and qualitative user feedback. The principal benefits of the context-based approach over a content-based approach are that it is computationally inexpensive, maximises the wisdom of crowds and is easily adaptable across domains. Finally, we built an integrated 'Annotate, Search and Browse' prototype on top of the proposed framework that supports complex structured queries, ontology-based concept querying and temporal segment querying as well as normal keyword search.
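
    The sketch below is an illustrative rendering (not the thesis ontology) of the four structural abstractions and four meta-information categories mentioned above, expressed as plain Python dataclasses; all field names are assumptions.

```python
# Four structural levels (video, shot, frame, image region) with annotations
# grouped into the four meta-information categories named in the abstract.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Annotation:
    category: str        # "media" | "content feature" | "content semantics" | "context"
    value: str

@dataclass
class Frame:
    timestamp: float
    regions: List[str] = field(default_factory=list)   # image regions

@dataclass
class Shot:
    start: float
    end: float
    frames: List[Frame] = field(default_factory=list)

@dataclass
class Video:
    uri: str
    shots: List[Shot] = field(default_factory=list)
    annotations: List[Annotation] = field(default_factory=list)

v = Video("http://example.org/videos/42",
          annotations=[Annotation("context", "uploaded during ICC World Cup")])
print(v)
```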